
    SD: a divergence-based estimation method for service demands in cloud systems

    Estimating the parameters of performance models of cloud systems presents several challenges due to the distributed nature of the applications, the chains of interactions of requests with architectural nodes, and the parallelism and coordination mechanisms implemented within these systems. In this work, we present a new inference algorithm for model parameters, called the state divergence (SD) algorithm, to accurately estimate resource demands in a complex cloud application. Differently from existing approaches, SD attempts to minimize the divergence between observed and modeled marginal state probabilities for individual nodes within an application, therefore requiring the availability of probabilistic measures from both the system and the underpinning model. Validation against a case study using the Apache Cassandra NoSQL database, together with random experiments, shows that SD can accurately predict demands and improve system behavior modeling and prediction.
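
    As a minimal sketch of the underlying idea, not the paper's algorithm: the demand D of a single queueing node can be fitted by minimizing the KL divergence between the observed marginal queue-length probabilities and those of an M/M/1 model with utilization rho = lambda * D. All function names and numbers below are illustrative.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def mm1_state_probs(demand, arrival_rate, n_states):
        # Truncated marginal queue-length distribution of an M/M/1 node:
        # p(n) = (1 - rho) * rho^n, with utilization rho = lambda * D.
        rho = arrival_rate * demand
        p = (1.0 - rho) * rho ** np.arange(n_states)
        return p / p.sum()                       # renormalize after truncation

    def kl(p, q, eps=1e-12):
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    def estimate_demand(observed_probs, arrival_rate):
        # Pick the demand whose modeled state probabilities diverge least
        # from the observed ones (keeping rho < 1 for stability).
        n = len(observed_probs)
        res = minimize_scalar(
            lambda d: kl(observed_probs, mm1_state_probs(d, arrival_rate, n)),
            bounds=(1e-6, 0.999 / arrival_rate), method="bounded")
        return res.x

    observed = np.array([0.40, 0.24, 0.14, 0.09, 0.05,
                         0.03, 0.02, 0.01, 0.01, 0.01])   # toy measurements
    print(estimate_demand(observed, arrival_rate=2.0))    # fitted demand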

    Modelling the contribution of metacognitions and expectancies to problematic smartphone use

    Abstract Background and aims In the current study we sought to clarify the contribution of metacognitions concerning smartphone use, relative to smartphone use expectancies, in the relationship between well-established predisposing psychological factors and problematic smartphone use (PSU). We tested a model where psychological distress, impulsivity, and proneness to boredom predict metacognitions about smartphone use and smartphone use expectancies, which in turn predict PSU. Methods A sample of 535 participants (F = 71.2%; mean age = 27.38 ± 9.05 years) was recruited. Results The model accounted for 64% of the PSU variance and showed good fit indices (χ2 = 16.01, df = 13, P = 0.24; RMSEA [90% CI] = 0.02 [0–0.05]; CFI = 0.99; SRMR = 0.03). We found that: (i) negative metacognitions and both positive and negative expectancies play a mediating role in the association of psychological distress and boredom proneness with PSU, with negative metacognitions showing a dominant role; (ii) there is no overlap between positive expectancies and positive metacognitions, especially when it comes to smartphone use as a means for socializing; (iii) impulsivity did not show a significant effect on PSU. Direct effects of the predictors on PSU were not found. Discussion and conclusions The current study found additional support for applying metacognitive theory to the understanding of PSU and highlights the dominant role of negative metacognitions about smartphone use in predicting PSU.
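
    For readers who want to reproduce this kind of path model, a hedged sketch using the semopy library follows; the variable names and input file are hypothetical placeholders for the study's constructs, not the authors' code.

    import pandas as pd
    import semopy

    data = pd.read_csv("psu_survey.csv")   # hypothetical participant scores

    MODEL = """
    NegMeta ~ Distress + Boredom + Impulsivity
    PosMeta ~ Distress + Boredom + Impulsivity
    NegExp ~ Distress + Boredom + Impulsivity
    PosExp ~ Distress + Boredom + Impulsivity
    PSU ~ NegMeta + PosMeta + NegExp + PosExp
    """

    model = semopy.Model(MODEL)            # mediation: factors -> beliefs -> PSU
    model.fit(data)
    print(semopy.calc_stats(model))        # RMSEA, CFI, chi-square, etc.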

    Is depression a real risk factor for acute myocardial infarction mortality? A retrospective cohort study

    Background: Depression has been associated with a higher risk of cardiovascular events and higher mortality in patients with one or more comorbidities. This study investigated whether continuative use of antidepressants (ADs), considered as a proxy of a state of depression, prior to acute myocardial infarction (AMI) is associated with higher mortality afterwards. The outcome of interest was mortality by AD use. Methods: A retrospective cohort study was conducted in the Veneto Region on hospital discharge records with a primary diagnosis of AMI in 2002-2015. Subsequent deaths were ascertained from mortality records. Drug purchases were used to identify AD users. A descriptive analysis was conducted on patients' demographics and clinical data. Survival after discharge was assessed with a Kaplan-Meier survival analysis and Cox's multiple regression model. Results: Among the 3985 hospital discharge records considered, 349 (8.8%) patients were classified as AD users. The mean AMI-related hospitalization rate was 164.8/100,000 population/year, and declined significantly from 204.9 in 2002 to 130.0 in 2015, but only for AD users (-40.4%). The mean overall follow-up was 4.6 ± 4.1 years. Overall, 523 patients (13.1%) died within 30 days of their AMI. The remainder survived a mean of 5.3 ± 4.0 years. After adjusting for potential confounders, use of antidepressants was independently associated with mortality (adj OR = 1.75, 95% CI: 1.40-2.19). Conclusions: Our findings show that AD users hospitalized for AMI have a worse prognosis in terms of mortality. The use of routinely available records can prove an efficient way to monitor trends in the state of health of specific subpopulations, enabling the early identification of AMI survivors with a history of antidepressant use.
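
    A brief sketch of this survival workflow with the lifelines library is shown below; the column names and input file are hypothetical, not the study's actual dataset.

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    df = pd.read_csv("ami_cohort.csv")     # hypothetical discharge-record extract

    # Kaplan-Meier curves stratified by antidepressant use (0 = no, 1 = yes).
    km = KaplanMeierFitter()
    for label, grp in df.groupby("ad_user"):
        km.fit(grp["years_followup"], grp["died"], label=f"AD user={label}")
        km.plot_survival_function()

    # Cox regression adjusting for hypothetical confounders.
    cph = CoxPHFitter()
    cph.fit(df[["years_followup", "died", "ad_user", "age", "sex"]],
            duration_col="years_followup", event_col="died")
    cph.print_summary()                    # adjusted association for ad_user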

    SplitPlace: AI augmented splitting and placement of large-scale neural networks in mobile edge environments

    In recent years, deep learning models have become ubiquitous in industry and academia alike. Deep neural networks can solve some of the most complex pattern-recognition problems today, but come at the price of massive compute and memory requirements. This makes deploying such large-scale neural networks challenging in resource-constrained mobile edge computing platforms, specifically in mission-critical domains like surveillance and healthcare. A promising solution is to split resource-hungry neural networks into lightweight, disjoint, smaller components for pipelined distributed processing. At present, there are two main approaches to do this: semantic and layer-wise splitting. The former partitions a neural network into parallel disjoint models that each produce a part of the result, whereas the latter partitions it into sequential models that produce intermediate results. However, there is no intelligent algorithm that decides which splitting strategy to use and places such modular splits onto edge nodes for optimal performance. To fill this gap, this work proposes a novel AI-driven online policy, SplitPlace, that uses Multi-Armed Bandits to intelligently decide between layer and semantic splitting strategies based on the input task's service deadline demands. SplitPlace places such neural network split fragments on mobile edge devices using decision-aware reinforcement learning for efficient and scalable computing. Moreover, SplitPlace fine-tunes its placement engine to adapt to volatile environments. Our experiments on physical mobile-edge environments with real-world workloads show that SplitPlace can significantly improve on the state of the art in terms of average response time, deadline violation rate, inference accuracy, and total reward by up to 46, 69, 3, and 12 percent, respectively.
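
    A minimal sketch of the bandit decision at the core of such a policy, assuming a standard UCB1 learner with one arm per splitting strategy; the class and reward model are illustrative, not the paper's implementation.

    import math

    class SplitChooser:
        # One UCB1 bandit deciding between the two splitting strategies.
        ARMS = ("layer", "semantic")

        def __init__(self):
            self.n = {a: 0 for a in self.ARMS}       # pull counts
            self.value = {a: 0.0 for a in self.ARMS} # running mean rewards
            self.t = 0

        def choose(self):
            self.t += 1
            for a in self.ARMS:                      # try each arm once first
                if self.n[a] == 0:
                    return a
            return max(self.ARMS, key=lambda a: self.value[a]
                       + math.sqrt(2 * math.log(self.t) / self.n[a]))

        def update(self, arm, reward):               # reward: QoS after placement
            self.n[arm] += 1
            self.value[arm] += (reward - self.value[arm]) / self.n[arm]

    # Assumed usage: one chooser per service-deadline class.
    choosers = {"tight": SplitChooser(), "relaxed": SplitChooser()}
    arm = choosers["tight"].choose()
    choosers["tight"].update(arm, reward=0.8)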

    A Queueing Network Model for Performance Prediction of Apache Cassandra

    NoSQL databases such as Apache Cassandra have attracted large interest in recent years thanks to their high availability, scalability, flexibility, and low latency. Still, there is limited research on performance engineering methods for NoSQL databases, even though such methods are needed since these systems are highly distributed and can thus incur significant cost/performance trade-offs. To address this need, we propose a novel queueing network model for the Cassandra NoSQL database aimed at supporting resource provisioning. The model explicitly captures key configuration parameters of Cassandra, such as consistency level and replication factor, allowing engineers to compare alternative system setups. Experimental results based on the YCSB benchmark indicate that, with a small amount of training for the estimation of its input parameters, the proposed model achieves good predictive accuracy across different loads and consistency levels, with average performance errors between 6% and 10% relative to measurements on the real system. We also discuss the applicability of our model to other NoSQL databases and further possible uses of it.
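
    As an illustration of why these parameters matter (not the paper's queueing network): if the coordinator waits for the CL-th fastest of RF replicas, each modeled here as an exponential server, read latency grows with the consistency level. All numbers below are illustrative.

    import numpy as np

    def read_latency(rf=3, cl=2, mean_service=0.005, n=100_000, seed=0):
        # Draw n reads; each of the RF replicas answers after an exponential
        # service time, and the read completes once CL replicas have answered.
        rng = np.random.default_rng(seed)
        t = rng.exponential(mean_service, size=(n, rf))
        t.sort(axis=1)
        return t[:, cl - 1].mean()           # CL-th fastest replica

    for cl in (1, 2, 3):                     # ONE, QUORUM, ALL with RF=3
        print(f"CL={cl}: {read_latency(cl=cl) * 1000:.2f} ms")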

    VOLTAIRE - An EU V framework programme


    MetaNet: automated dynamic selection of scheduling policies in cloud environments

    Task scheduling is a well-studied problem in the context of optimizing the Quality of Service (QoS) of cloud computing environments. In order to sustain the rapid growth of computational demands, one of the most important QoS metrics for cloud schedulers is the execution cost. In this regard, several data-driven schedulers based on deep neural networks (DNNs) have been proposed in recent years to allow scalable and efficient resource management in dynamic workload settings. However, optimal scheduling frequently relies on sophisticated DNNs with high computational needs, implying higher execution costs. Further, even in non-stationary environments, sophisticated schedulers might not always be required, and we could briefly rely on low-cost schedulers in the interest of cost-efficiency. Therefore, this work aims to solve the non-trivial meta-problem of online dynamic selection of a scheduling policy using a surrogate model called MetaNet. Unlike traditional solutions with a fixed scheduling policy, MetaNet chooses, on the fly, a scheduler from a large set of DNN-based methods to optimize task scheduling and execution costs in tandem. Compared to state-of-the-art DNN schedulers, this allows for improvements in execution costs, energy consumption, response time, and service level agreement violations by up to 11, 43, 8, and 13 percent, respectively.
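
    A hedged sketch of the meta-selection step, under the assumption that a trained surrogate predicts the execution cost of each candidate scheduler from the current system state; the interfaces and the toy cost model are hypothetical.

    def select_scheduler(surrogate, state, schedulers):
        # Predict the execution cost of every candidate scheduler for the
        # current state and run the cheapest one for the next interval.
        costs = {name: surrogate(state, name) for name in schedulers}
        return min(costs, key=costs.get)

    # Toy surrogate: the heavy DNN scheduler only pays off under high load.
    def toy_surrogate(state, name):
        load = state["cpu_load"]
        return {"heuristic": 1.0 + 3.0 * load, "dnn": 2.5 + 0.5 * load}[name]

    print(select_scheduler(toy_surrogate, {"cpu_load": 0.9},
                           ["heuristic", "dnn"]))   # -> dnn
    print(select_scheduler(toy_surrogate, {"cpu_load": 0.2},
                           ["heuristic", "dnn"]))   # -> heuristic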

    SimTune: bridging the simulator reality gap for resource management in edge-cloud computing

    Industries and services are undergoing an Internet of Things centric transformation globally, giving rise to an explosion of multi-modal data generated each second. This, together with the requirement of low-latency result delivery, has led to the ubiquitous adoption of the edge and cloud computing paradigms. Edge computing follows the data gravity principle, wherein computational devices move closer to the end-users to minimize data transfer and communication times. However, large-scale computation has exacerbated the problem of efficient resource management in hybrid edge-cloud platforms. In this regard, data-driven models such as deep neural networks (DNNs) have gained popularity, giving rise to the notion of edge intelligence. However, DNNs face significant problems of data saturation when fed volatile data: providing more data no longer translates into improvements in performance. To address this issue, prior work has leveraged coupled simulators that, akin to digital twins, generate out-of-distribution training data, alleviating the data-saturation problem. However, simulators face the reality-gap problem, that is, inaccuracy in the emulation of real computational infrastructure due to the abstractions such simulators make. To combat this, we develop a framework, SimTune, that tackles this challenge by leveraging a low-fidelity surrogate model of the high-fidelity simulator to update the parameters of the latter, so as to increase the simulation accuracy. This further helps co-simulated methods generalize to edge-cloud configurations for which human-encoded parameters are not known a priori. Experiments comparing SimTune against state-of-the-art data-driven resource management solutions on a real edge-cloud platform demonstrate that simulator tuning can improve quality of service metrics such as energy consumption and response time by up to 14.7% and 7.6%, respectively.
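
    A compact sketch of surrogate-based simulator tuning under simplifying assumptions (random parameter sampling, a random-forest surrogate); the function names and the toy simulator are illustrative, not SimTune's code.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def tune_simulator(simulate, real_metrics, dim, n_samples=200, seed=0):
        # Sample candidate simulator parameters and record simulator outputs.
        rng = np.random.default_rng(seed)
        thetas = rng.uniform(0.0, 1.0, size=(n_samples, dim))
        outputs = np.array([simulate(t) for t in thetas])
        # Fit a cheap surrogate of the simulator, then search it for the
        # parameters whose predicted metrics best match the real traces.
        surrogate = RandomForestRegressor(n_estimators=50).fit(thetas, outputs)
        def gap(t):
            return float(np.sum((surrogate.predict(t[None])[0]
                                 - real_metrics) ** 2))
        return min(thetas, key=gap)

    # Toy check: a "simulator" whose hidden true parameter is 0.3.
    best = tune_simulator(lambda t: np.array([2.0 * t[0], t[0] ** 2]),
                          real_metrics=np.array([0.6, 0.09]), dim=1)
    print(best)   # close to [0.3]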

    CILP: Co-simulation based imitation learner for dynamic resource provisioning in cloud computing environments

    Intelligent Virtual Machine (VM) provisioning is central to cost- and resource-efficient computation in cloud computing environments. As bootstrapping VMs is time-consuming, a key challenge for latency-critical tasks is to predict future workload demands so as to provision VMs proactively. However, existing AI-based solutions tend not to holistically consider all crucial aspects, such as provisioning overheads, heterogeneous VM costs, and the Quality of Service (QoS) of the cloud system. To address this, we propose a novel method, called CILP, that formulates VM provisioning as two sub-problems of prediction and optimization, where the provisioning plan is optimized based on predicted workload demands. CILP leverages a neural network as a surrogate model to predict future workload demands, with a co-simulated digital twin of the infrastructure to compute QoS scores. We extend the neural network to also act as an imitation learner that dynamically decides the optimal VM provisioning plan. A transformer-based neural model reduces training and inference overheads, while our novel two-phase decision-making loop facilitates informed provisioning decisions. Crucially, we address limitations of prior work by including resource utilization, deployment costs, and provisioning overheads to inform the provisioning decisions in our imitation learning framework. Experiments with three public benchmarks demonstrate that CILP gives up to 22% higher resource utilization, 14% higher QoS scores, and 44% lower execution costs compared to the current online and offline optimization based state-of-the-art methods.
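
    A minimal sketch of the two-phase loop described above, with hypothetical stand-ins for the predictor, the co-simulator, and the imitation learner; CILP's actual models are transformer-based neural networks.

    def provision_step(predict, cosim_qos, imitate_update, history, actions):
        demand = predict(history)                       # phase 1: forecast load
        # Phase 2: score each candidate plan on the co-simulated digital twin,
        # pick the best, and train the imitation learner to reproduce it.
        scores = {a: cosim_qos(demand, a) for a in actions}
        best = max(scores, key=scores.get)
        imitate_update(history, demand, best)
        return best

    # Toy stand-ins: QoS is best when provisioned capacity tracks demand.
    plan = provision_step(
        predict=lambda h: sum(h) / len(h),
        cosim_qos=lambda d, a: -abs(a - d),
        imitate_update=lambda *args: None,
        history=[3, 4, 5],
        actions=[2, 4, 8])                              # candidate VM counts
    print(plan)   # -> 4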

    DRAGON: Decentralized fault tolerance in edge federations

    Edge federation is a new computing paradigm that seamlessly interconnects the resources of multiple edge service providers. A key challenge in such systems is the deployment of latency-critical and AI-based, resource-intensive applications on constrained devices. To address this challenge, we propose a novel memory-efficient deep learning model, namely generative optimization networks (GONs). Unlike GANs, GONs use a single network to both discriminate input and generate samples, significantly reducing their memory footprint. Leveraging this low memory footprint, we propose a decentralized fault-tolerance method called DRAGON that runs simulations, as in a digital-twin model, to quickly predict and optimize the performance of the edge federation. Extensive experiments with real-world edge computing benchmarks on multiple Raspberry Pi based federated edge configurations show that DRAGON outperforms the baseline methods on fault-detection and Quality of Service (QoS) metrics. Specifically, the proposed method gives higher F1 scores for fault detection than the best deep learning (DL) method, while consuming less memory than the heuristic methods. This allows for improvements in energy consumption, response time, and service level agreement violations by up to 74, 63, and 82 percent, respectively.
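
    A compact PyTorch sketch of the GON idea (illustrative, not the paper's architecture): a single network D scores inputs, and samples are produced by gradient-ascending a noise input through D, which is what removes the separate generator and its memory cost.

    import torch

    # One scoring network; no separate generator, unlike a GAN.
    D = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1), torch.nn.Sigmoid())

    def generate(steps=50, lr=0.1):
        z = torch.randn(1, 8, requires_grad=True)       # start from noise
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            (-D(z).mean()).backward()                   # ascend D's score
            opt.step()
        return z.detach()

    sample = generate()           # the optimized input itself is the sample
    print(D(sample).item())       # score after optimization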